Mules don't get caught one transaction at a time. They get caught in the ring.
Soft-kernel scoring instead of cliff-edge rules. A temporal graph instead of isolated alerts. Counterfactual evidence instead of opaque scores — STR-ready, by default.
Every threshold is a cliff. Mules learn it. Honest customers fall off it. Investigators stitch context across systems by hand. By the time the ring is mapped, the funds are layered three accounts deep — and the regulator wants to know why the rule that was tuned last quarter missed it.
Every disruption pattern in mule fraud — pass-through flows, threshold evasion, ring topologies, device clusters, identity collisions, watchlist contagion — runs as a soft-kernel scorecard on the same temporal graph. No cliffs to tune around. The composite stays calibrated as mules adapt.
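To make "soft-kernel scorecard" concrete, here is a minimal illustrative sketch, not the product's actual implementation: two toy scorecards (pass-through similarity and threshold hugging) each emit a score in [0, 1], and a weighted mean composes them so no single hard rule decides the outcome. All function names, weights, and the 10,000 limit are assumptions for the example.

```python
import math

# Hypothetical scorecards: each maps window-level account features to a
# continuous score in [0, 1]. Names, weights, and parameters are illustrative.

def pass_through_score(in_amt: float, out_amt: float) -> float:
    """Closer in/out totals over a window -> higher pass-through score."""
    if max(in_amt, out_amt) == 0:
        return 0.0
    return min(in_amt, out_amt) / max(in_amt, out_amt)

def threshold_evasion_score(amount: float, limit: float, width: float = 500.0) -> float:
    """Peaks as amounts hug a reporting limit from below, decays smoothly."""
    gap = limit - amount
    if gap < 0:
        return 0.0
    return math.exp(-gap / width)

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of scorecard outputs: no cliff, every signal contributes."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total

scores = {
    "pass_through": pass_through_score(in_amt=9_800, out_amt=9_750),
    "threshold_evasion": threshold_evasion_score(amount=9_900, limit=10_000),
}
weights = {"pass_through": 0.6, "threshold_evasion": 0.4}
print(round(composite(scores, weights), 3))
```

Because each scorecard is continuous, an account that sits just under every individual trigger still accumulates a high composite, which is the point of scoring the pattern rather than the rule.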
Most rules engines flag a single suspicious payment, queue it for review, and lose the ring around it. By the time a human pieces the network together, the money has already moved on and the herder has spun up a new wave.
Real mule detection lives in the math between rules: continuous membership instead of binary cliffs, peer cohorts learned nightly, scorecards ensembling into a composite that traces back to evidence. A model dropped on raw transactions answers fast. It also misses the ring.
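The "continuous membership instead of binary cliffs" idea can be shown in a few lines. This is an illustrative sketch under assumed parameters, not the product's kernel: a hard rule fires only at or above one exact value, so a mule who moves 9,999 instead of 10,000 slips under it, while a logistic membership curve rises smoothly around the same reference point and near-misses still contribute.

```python
import math

def hard_rule(amount: float, threshold: float = 10_000) -> int:
    """Binary cliff: 9,999 scores identically to 10."""
    return 1 if amount >= threshold else 0

def soft_membership(amount: float, center: float = 10_000,
                    steepness: float = 0.002) -> float:
    """Logistic curve: ~0 far below center, ~1 far above, smooth in between."""
    return 1.0 / (1.0 + math.exp(-steepness * (amount - center)))

# Compare the cliff against the curve across the evasion zone.
for amount in (8_000, 9_900, 10_000, 10_100):
    print(amount, hard_rule(amount), round(soft_membership(amount), 3))
```

A mule structuring payments at 9,900 gets a hard-rule score of 0 but a membership near 0.45, enough to move the composite once other scorecards agree.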
Twenty soft-kernel scorecards, ensembling into a composite. Three of them tell the story: the rhythm of behaviour, the no-cliff curve, and the fan-in that exposes the herder.
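The fan-in signal mentioned above can be sketched on a temporal edge list. This is an assumed toy version, not the shipped scorecard: given (timestamp, sender, receiver) edges, count the maximum number of distinct senders converging on each account within a rolling window. Many small inbound sources collapsing onto one account is the classic herder signature.

```python
from collections import defaultdict

def fan_in(edges, window: float):
    """Return {receiver: max distinct senders within any `window`-long span}."""
    by_receiver = defaultdict(list)
    for ts, sender, receiver in sorted(edges):
        by_receiver[receiver].append((ts, sender))
    result = {}
    for receiver, events in by_receiver.items():
        best, left = 0, 0
        for right in range(len(events)):
            # Slide the left edge until the span fits inside the window.
            while events[right][0] - events[left][0] > window:
                left += 1
            senders = {s for _, s in events[left:right + 1]}
            best = max(best, len(senders))
        result[receiver] = best
    return result

# Toy temporal graph: three sources hit "mule" within 10 time units,
# a fourth arrives too late to join that window.
edges = [
    (0, "a1", "mule"), (5, "a2", "mule"), (9, "a3", "mule"),
    (40, "a4", "mule"), (3, "x", "normal"),
]
print(fan_in(edges, window=10))  # {'mule': 3, 'normal': 1}
```

Fed into the soft-kernel framing, the raw fan-in count would itself pass through a smooth curve rather than a cutoff, so a ring of four senders does not look categorically different from a ring of five.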